- Date: Tue, 30 Mar 93 11:21:22 -0800
- From: ersmith@netcom.com (Eric R. Smith)
- Message-Id: <9303301921.AA12233@netcom4.netcom.com>
- To: mint@atari.archive.umich.edu
- Subject: shared libraries for MiNT
-
- A Proposal for Implementing Shared Libraries
-
- I think I've finally figured out a "good" way to implement shared
- libraries (i.e. low overhead, doesn't need VM, requires few changes
- to existing applications). Here's my proposal; please let me know what
- you think. (In case it isn't obvious, this is *very* far from being
- cast in stone :-). I do think we need to do shared libraries soon,
- though.)
-
- A shared library will be implemented as a DRI format object file, with
- the GST long name symbol table. A program linked with shared libraries
- will have the same format, but with an additional header prepended
- which gives the names and version numbers of the shared libraries it
- requires.
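-
- As a rough sketch of the kind of information that extra header might
- carry (the struct and field names below are my own invention for
- illustration; the proposal doesn't pin down a layout):
-
-     /* hypothetical prepended header; one slib_ref per required
-        library follows the fixed part */
-     struct slib_ref {
-         char  name[16];      /* library name, NUL padded        */
-         short version;       /* minimum version number required */
-     };
-
-     struct slib_hdr {
-         long  magic;         /* marks the new executable format */
-         short nlibs;         /* number of slib_ref entries      */
-         /* struct slib_ref refs[nlibs] follow here */
-     };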
-
- Both the libraries and the programs will be compiled to use
- register A5 as a base register (e.g. with gcc, by passing the
- -mbaserel flag). They need not be position independent;
- the libraries will appear at the same virtual address (determined at
- load time) in every process, and programs will be relocated at load
- time by the kernel.
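-
- To illustrate what baserel code looks like at the source level (the
- assembly in the comment is only a sketch of what such a compiler
- might emit, not actual gcc output):
-
-     int counter;                /* lives in the data segment */
-
-     void bump(void)
-     {
-         /* with -mbaserel this compiles to something like
-              addq.l #1,_counter(a5)
-            i.e. the data is always found at a fixed offset from A5,
-            no matter where the data segment itself ended up */
-         counter++;
-     }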
-
- The data and bss segments of a given shared library will always be
- located at the same (relative) offset in the data/bss area of
- a program using that library. (Note that I'm going to call the
- "data/bss area" just the "data segment" from here on in, because the
- bss is just a special part of the data segment from the kernel's point
- of view.)
-
- For example, let's consider 2 programs, BAR and FOO. BAR uses libraries A,
- B, and C; FOO uses A, C, and D.
-
- BAR's data segment will look like this: (assuming that A, B, C, and D are
- the first 4 libraries loaded, and were loaded in that order)
-
- ------------------------------------------------------------------------------
- | A's data | B's data | C's data | BAR's data |
- ------------------------------------------------------------------------------
-
- FOO's data segment will look like this:
-
- ------------------------------------------------------------------------------
- | A's data | FOO's data | C's data | D's data | more of FOO's data |
- ------------------------------------------------------------------------------
-
- Note that FOO's data segment is split up. This is because library C expects
- its data to come at a certain offset (after A's and B's), and so C's data
- segment in process FOO must start at that offset. Since FOO doesn't use
- library B, the part of its data segment that B would normally use is
- available for FOO's use. The kernel will be responsible for finding
- such "holes" and taking advantage of them where possible. (This may
- actually turn out to be tricky, since arrays will have to be contiguous.)
- We also may want to provide a way to specify that certain libraries are
- mutually exclusive. In the example above, if libraries B and D were
- mutually exclusive, then D's data could occupy the same offsets as B's
- (or a subset thereof, if D has less data).
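-
- A very rough sketch of the bookkeeping involved (the structure and
- the function here are purely illustrative, not a proposed kernel
- interface):
-
-     #include <stdio.h>
-
-     struct slib {
-         const char *name;
-         long data_offset;   /* system-wide fixed offset of its data */
-         long data_size;     /* size of its data+bss                 */
-     };
-
-     /* Given the libraries a program uses, sorted by data_offset,
-        report the holes between them; the kernel could hand these
-        gaps to the program's own data, keeping in mind that arrays
-        must still be contiguous. */
-     void show_holes(const struct slib *used, int n, long total)
-     {
-         long pos = 0;
-         int i;
-
-         for (i = 0; i < n; i++) {
-             if (used[i].data_offset > pos)
-                 printf("hole at %ld, size %ld\n",
-                        pos, used[i].data_offset - pos);
-             pos = used[i].data_offset + used[i].data_size;
-         }
-         if (pos < total)
-             printf("hole at %ld, size %ld\n", pos, total - pos);
-     }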
-
- Does this make sense? The key thing is that since everyone is using
- register A5 as a base register, the libraries can always find their
- data (at the particular fixed offset into the data segment assigned to
- them).
-
- The disadvantage of this scheme is that once the data segment grows
- past 64K, libraries and/or programs that use 16-bit offsets will be
- in trouble. There are ways around this, of course.
-
- Another disadvantage is that program load times will be longer, since
- the kernel will have to do the relocation and symbol resolution.
-
- An alternative would be to use something like Sun's global offset table.
- That scheme is slower, though, since it adds another layer of indirection
- to variable references.
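-
- To make that extra indirection concrete, here is a loose C analogy
- (only the shape of the access, not how Sun actually lays out its
- table):
-
-     int x;                      /* some shared variable          */
-     int *x_slot = &x;           /* its entry in an offset table  */
-
-     /* baserel: x sits at a fixed A5 offset, one load per reference */
-     int read_direct(void)  { return x; }
-
-     /* offset-table scheme: the fixed slot holds a pointer to x, so
-        every reference pays an extra load to fetch the address */
-     int read_via_got(void) { return *x_slot; }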
-
- Please let me know your thoughts on this matter.
-
- Eric
-